Versions:
CLI Proxy API 6.9.19, released by developer Luis Pater after 468 incremental builds, is a lightweight proxy server that exposes Google Gemini CLI, OpenAI Codex, and Anthropic Claude Code as a single, OpenAI-compatible REST endpoint. Any HTTP-speaking IDE extension, CLI helper, or automation script can therefore invoke frontier models such as Gemini 2.5 Pro, GPT-5, or Claude without rewriting its calling code.

Written in Go and configured through a single YAML file, the program listens on a local port, translates incoming ChatGPT-style JSON into each provider's native CLI invocation, and streams the generated text back to the caller. When a free-tier account hits its rate-limit window, the proxy automatically cycles to the next configured account.

Typical use cases include giving Visual Studio Code Copilot or JetBrains AI Assistant a zero-cost back end, letting GitHub Actions pipelines generate commit messages, enabling university labs to benchmark frontier models from a single SDK, and allowing Chinese developers to route traffic through the newly added Qwen Code endpoint while continuing to use English-language tooling.

Because the proxy handles authentication itself (OAuth for Claude, API keys for OpenAI, and simple stdin input for Gemini), client applications remain unchanged when providers update their login flows; upgrading CLI Proxy API to 6.9.19 therefore propagates new models and guardrails to every connected service at once.

The software is available for free on get.nero.com, with downloads provided via trusted Windows package sources (e.g. winget) that always deliver the latest version and support batch installation of multiple applications.
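Because the proxy speaks the standard OpenAI chat-completions wire format, a client only needs to point its base URL at the local port. The sketch below builds such a request payload; the port number, path, and model name are illustrative assumptions, not values documented by CLI Proxy API itself (the real port comes from its YAML configuration):

```python
import json

# Hypothetical local address; the actual port is set in the proxy's YAML config.
PROXY_BASE_URL = "http://localhost:8317/v1"

def build_chat_request(model: str, prompt: str, stream: bool = True) -> str:
    """Build a standard OpenAI-style chat-completions payload.

    The proxy translates JSON of this shape into the native CLI invocation
    for whichever provider backs the requested model, so the client-side
    code is identical for Gemini, GPT, or Claude models.
    """
    payload = {
        "model": model,  # e.g. a Gemini, GPT, or Claude model name
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # the proxy streams generated text back to the caller
    }
    return json.dumps(payload)

body = build_chat_request("gemini-2.5-pro", "Write a commit message for this diff.")
# This body could be POSTed to f"{PROXY_BASE_URL}/chat/completions" with any
# HTTP client; switching providers only means changing the model string.
```

In this setup, rate-limit rotation, authentication, and provider login flows all stay on the proxy side, which is why the client snippet contains no credentials at all.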
Tags: